dexterous manipulation
H-InDex: Visual Reinforcement Learning with Hand-Informed Representations for Dexterous Manipulation
Human hands possess remarkable dexterity and have long served as a source of inspiration for robotic manipulation. In this work, we propose a human $\textbf{H}$and-$\textbf{In}$formed visual representation learning framework to solve difficult $\textbf{Dex}$terous manipulation tasks ($\textbf{H-InDex}$) with reinforcement learning. Our framework consists of three stages: $\textit{(i)}$ pre-training representations with 3D human hand pose estimation, $\textit{(ii)}$ offline adapting representations with self-supervised keypoint detection, and $\textit{(iii)}$ reinforcement learning with exponential moving average BatchNorm. The last two stages modify only $0.36$% of the pre-trained representation's parameters in total, ensuring that the knowledge from pre-training is preserved to the full extent. We empirically study $\textbf{12}$ challenging dexterous manipulation tasks and find that $\textbf{H-InDex}$ largely surpasses strong baseline methods and recent visual foundation models for motor control. Code and videos are available at https://yanjieze.com/H-InDex.
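The exponential moving average BatchNorm in stage (iii) can be illustrated with a minimal 1-D sketch. The class name, momentum value, and scalar setting here are assumptions for illustration, not the paper's implementation; the idea is that only the running statistics drift during RL while pre-trained weights stay frozen:

```python
# Illustrative sketch of EMA BatchNorm statistics (not H-InDex's code):
# running mean/variance are updated by an exponential moving average,
# while learned affine parameters from pre-training stay frozen (omitted).
class EMABatchNorm1D:
    def __init__(self, momentum=0.99, eps=1e-5):
        self.momentum = momentum
        self.eps = eps
        self.mean = 0.0
        self.var = 1.0

    def update(self, batch):
        # Statistics of the current batch
        m = sum(batch) / len(batch)
        v = sum((x - m) ** 2 for x in batch) / len(batch)
        # Exponential moving average toward the new batch statistics
        self.mean = self.momentum * self.mean + (1 - self.momentum) * m
        self.var = self.momentum * self.var + (1 - self.momentum) * v

    def normalize(self, x):
        return (x - self.mean) / (self.var + self.eps) ** 0.5
```

With momentum close to 1, the statistics adapt slowly to the RL visual distribution, which is how only a tiny fraction of the representation's parameters end up changing.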
Learning Dexterous Manipulation Skills from Imperfect Simulations
Hsieh, Elvis, Hsieh, Wen-Han, Wang, Yen-Jen, Lin, Toru, Malik, Jitendra, Sreenath, Koushil, Qi, Haozhi
Figure 1: We propose DexScrew, a sim-to-real framework for learning dexterous manipulation skills when the environment cannot be accurately simulated. In simulation, we use simplified objects to learn transferable rotational skills, which are then used to collect data and train tactile policies in the real world. We demonstrate the framework on contact-rich screwdriving (top row) and nut-bolt fastening (middle row). We also show generalization across different objects (bottom row). More videos and code are available on https://dexscrew.github.io.
Abstract-- Reinforcement learning and sim-to-real transfer have made significant progress in dexterous manipulation. However, progress remains limited by the difficulty of simulating complex contact dynamics and multisensory signals, especially tactile feedback. In this work, we propose DexScrew, a sim-to-real framework that addresses these limitations and demonstrates its effectiveness on nut-bolt fastening and screwdriving with multi-fingered hands. The framework has three stages. First, we train reinforcement learning policies in simulation using simplified object models that lead to the emergence of correct finger gaits. We then use the learned policy as a skill primitive within a teleoperation system to collect real-world demonstrations that contain tactile and proprioceptive information. Finally, we train a behavior cloning policy that incorporates tactile sensing and show that it generalizes to nuts and screwdrivers with diverse geometries. Experiments across both tasks show high task progress ratios compared to direct sim-to-real transfer and robust performance even on unseen object shapes and under external perturbations.
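The third stage, behavior cloning on tactile and proprioceptive demonstrations, can be illustrated with a toy update; the linear policy, feature layout, and learning rate here are assumptions for illustration, not DexScrew's actual architecture:

```python
# Toy behavior-cloning step on concatenated (tactile + proprioceptive)
# features; a 1-D linear policy stands in for the paper's real network.
def bc_step(weights, obs, target_action, lr=0.1):
    """One mean-squared-error gradient step toward a demonstrated action.

    weights: per-feature weights of the linear policy.
    obs: one observation vector (tactile + proprioceptive features).
    target_action: the action recorded during teleoperation.
    """
    pred = sum(w * x for w, x in zip(weights, obs))
    err = pred - target_action
    # Gradient of 0.5 * err**2 with respect to each weight is err * x
    return [w - lr * err * x for w, x in zip(weights, obs)]
```

Repeating such updates over the collected demonstrations distills the sim-trained skill primitive, plus real tactile cues that simulation cannot provide, into a single policy.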
METIS: Multi-Source Egocentric Training for Integrated Dexterous Vision-Language-Action Model
Fu, Yankai, Chen, Ning, Zhao, Junkai, Shan, Shaozhe, Yao, Guocai, Wang, Pengwei, Wang, Zhongyuan, Zhang, Shanghang
Building a generalist robot that can perceive, reason, and act across diverse tasks remains an open challenge, especially for dexterous manipulation. A major bottleneck lies in the scarcity of large-scale, action-annotated data for dexterous skills, as teleoperation is difficult and costly. Human data, with its vast scale and diverse manipulation behaviors, provides rich priors for learning robotic actions. While prior works have explored leveraging human demonstrations, they are often constrained by limited scenarios and a large visual gap between humans and robots. To eliminate these limitations, we propose METIS, a vision-language-action (VLA) model for dexterous manipulation pretrained on multi-source egocentric datasets. We first construct EgoAtlas, which integrates large-scale human and robotic data from multiple sources, all unified under a consistent action space. We further extract motion-aware dynamics, a compact and discretized motion representation, which provides efficient and expressive supervision for VLA training. Built upon these, METIS integrates reasoning and acting into a unified framework, enabling effective deployment to downstream dexterous manipulation tasks. Our method demonstrates exceptional dexterous manipulation capabilities, achieving the highest average success rate in six real-world tasks. Experimental results also highlight superior generalization and robustness in out-of-distribution scenarios. These findings establish METIS as a promising step toward a generalist model for dexterous manipulation.
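A "compact and discretized motion representation" of the kind described above can be sketched as uniform binning of per-step motion deltas into integer tokens. The bin count and value range below are illustrative assumptions, not METIS's actual scheme:

```python
# Illustrative sketch: map continuous motion deltas to discrete tokens
# (and back), so motion can supervise a token-predicting VLA model.
def tokenize_motion(deltas, num_bins=256, lo=-1.0, hi=1.0):
    """Map each per-step motion delta to an integer token in [0, num_bins)."""
    tokens = []
    for d in deltas:
        d = min(max(d, lo), hi)           # clamp to the modeled range
        frac = (d - lo) / (hi - lo)       # normalize to [0, 1]
        tokens.append(min(int(frac * num_bins), num_bins - 1))
    return tokens

def detokenize_motion(tokens, num_bins=256, lo=-1.0, hi=1.0):
    """Invert tokenization, returning each bin's center value."""
    width = (hi - lo) / num_bins
    return [lo + (t + 0.5) * width for t in tokens]
```

Discretizing this way lets heterogeneous human and robot motion share one vocabulary, at the cost of quantization error bounded by half a bin width.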
The Developments and Challenges towards Dexterous and Embodied Robotic Manipulation: A Survey
Li, Gaofeng, Wang, Ruize, Xu, Peisen, Ye, Qi, Chen, Jiming
Achieving human-like dexterous robotic manipulation remains a central goal and a pivotal challenge in robotics. The development of Artificial Intelligence (AI) has allowed rapid progress in robotic manipulation. This survey summarizes the evolution of robotic manipulation from mechanical programming to embodied intelligence, alongside the transition from simple grippers to multi-fingered dexterous hands, outlining key characteristics and main challenges. Focusing on the current stage of embodied dexterous manipulation, we highlight recent advances in two critical areas: dexterous manipulation data collection (via simulation, human demonstrations, and teleoperation) and skill-learning frameworks (imitation and reinforcement learning). Then, based on the overview of the existing data collection paradigm and learning framework, three key challenges restricting the development of dexterous robotic manipulation are summarized and discussed.
Dexterous Manipulation Transfer via Progressive Kinematic-Dynamic Alignment
Bai, Wenbin, Chen, Qiyu, Lin, Xiangbo, Li, Jianwen, Li, Quancheng, Pan, Hejiang, Sun, Yi
The inherent difficulty and limited scalability of collecting manipulation data using multi-fingered robot hand hardware platforms have resulted in severe data scarcity, impeding research on data-driven dexterous manipulation policy learning. To address this challenge, we present a hand-agnostic manipulation transfer system. It efficiently converts human hand manipulation sequences from demonstration videos into high-quality dexterous manipulation trajectories without requirements of massive training data. To tackle the multi-dimensional disparities between human hands and dexterous hands, as well as the challenges posed by high-degree-of-freedom coordinated control of dexterous hands, we design a progressive transfer framework: first, we establish primary control signals for dexterous hands based on kinematic matching; subsequently, we train residual policies with action space rescaling and thumb-guided initialization to dynamically optimize contact interactions under unified rewards; finally, we compute wrist control trajectories with the objective of preserving operational semantics. Using only human hand manipulation videos, our system automatically configures system parameters for different tasks, balancing kinematic matching and dynamic optimization across dexterous hands, object categories, and tasks. Extensive experimental results demonstrate that our framework can automatically generate smooth and semantically correct dexterous hand manipulation that faithfully reproduces human intentions, achieving high efficiency and strong generalizability with an average transfer success rate of 73%, providing an easily implementable and scalable method for collecting robot dexterous manipulation data.
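The residual-policy stage with action-space rescaling described above can be sketched as a small learned correction around the kinematic-matching base action. The function name, scale parameter, and joint limits are illustrative assumptions, not the paper's configuration:

```python
# Illustrative sketch of a residual policy with action-space rescaling:
# the learned residual can only nudge the kinematic prior a little.
def residual_action(base_action, residual, scale=0.1, lo=-1.0, hi=1.0):
    """Combine a kinematic-matching base action with a rescaled residual.

    base_action: per-joint commands from kinematic matching.
    residual: per-joint policy outputs in [-1, 1].
    scale: caps how far the residual can move each joint.
    """
    combined = []
    for b, r in zip(base_action, residual):
        a = b + scale * r                      # small correction around the prior
        combined.append(min(max(a, lo), hi))   # stay within joint limits
    return combined
```

Shrinking the residual's action space this way keeps exploration near the human-derived motion while still letting the policy optimize contact interactions.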
Dexterous Robotic Piano Playing at Scale
Chen, Le, Zhao, Yi, Schneider, Jan, Gao, Quankai, Guist, Simon, Qian, Cheng, Kannala, Juho, Schölkopf, Bernhard, Pajarinen, Joni, Büchler, Dieter
This work has been submitted to the IEEE for possible publication.
Abstract--Endowing robot hands with human-level dexterity has been a long-standing goal in robotics. Bimanual robotic piano playing represents a particularly challenging task: it is high-dimensional, contact-rich, and requires fast, precise control. Our approach is built on three core components. First, we introduce an automatic fingering strategy based on Optimal Transport (OT), allowing the agent to autonomously discover efficient piano-playing strategies from scratch without demonstrations. Second, we conduct large-scale Reinforcement Learning (RL) by training more than 2,000 agents, each specialized in distinct music pieces, and aggregate their experience into a dataset named RP1M++, consisting of over one million trajectories for robotic piano playing. Extensive experiments and ablation studies highlight the effectiveness and scalability of our approach, advancing dexterous robotic piano playing at scale.
Achieving human-level dexterity remains one of the central challenges in robotics. The difficulty stems from the breadth of challenges ranging from contact-rich manipulation to dynamic athletic tasks, each posing distinct demands. Manipulation tasks, such as grasping or reorienting objects [1], require sustained application of appropriate forces at moderate speeds across objects with diverse shapes, materials, and weight distributions. Dynamic tasks, such as juggling [2] or table tennis [3], involve frequent contact changes, demand high precision, and allow little tolerance for error due to the rarity of contact opportunities. The combination of requiring both precision and speed makes reproducing human-level dexterity particularly challenging.
Q. Gao is with the University of Southern California, CA 90007, United States (e-mail: quankaig@usc.edu). C. Qian is with Imperial College London, SW7 2AZ, London, United Kingdom (e-mail: c.qian24@imperial.ac.uk). J. Kannala is with the University of Oulu, 90570 Oulu, Finland. D. Büchler is also with the University of Alberta (Canada), the Alberta Machine Intelligence Institute (Amii), and holds a Canada CIFAR AI Chair.
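The optimal-transport flavor of the automatic fingering strategy can be illustrated with a toy assignment of fingers to keys that minimizes total travel cost. The 1-D positions, brute-force search, and absolute-distance cost are simplifications for illustration; the paper's actual OT formulation is more general:

```python
from itertools import permutations

# Toy sketch: assign each key press to a finger so that the summed
# finger-to-key travel distance is minimal (an assignment-problem view
# of optimal-transport-based fingering).
def assign_fingers(finger_pos, key_pos):
    """Return, for each key, the index of the finger assigned to it."""
    best_cost, best_assign = float("inf"), None
    # Enumerate every injective finger-to-key assignment (fine for <= 5 fingers)
    for perm in permutations(range(len(finger_pos)), len(key_pos)):
        cost = sum(abs(finger_pos[f] - key_pos[k]) for k, f in enumerate(perm))
        if cost < best_cost:
            best_cost, best_assign = cost, list(perm)
    return best_assign
```

For hand-sized problems the brute force is cheap; a real system would solve the same assignment with an OT or Hungarian-style solver and a richer cost that accounts for timing and hand posture.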
DexCanvas: Bridging Human Demonstrations and Robot Learning for Dexterous Manipulation
Xu, Xinyue, Sun, Jieqiang, Jing, Dai, Chen, Siyuan, Ma, Lanjie, Sun, Ke, Zhao, Bin, Yuan, Jianbo, Yi, Sheng, Zhu, Haohua, Lu, Yiwen
We present DexCanvas, a large-scale hybrid real-synthetic human manipulation dataset containing 7,000 hours of dexterous hand-object interactions seeded from 70 hours of real human demonstrations, organized across 21 fundamental manipulation types based on the Cutkosky taxonomy (Feix et al., 2016). Each entry combines synchronized multi-view RGB-D, high-precision mocap with MANO hand parameters, and per-frame contact points with physically consistent force profiles. Our real-to-sim pipeline uses reinforcement learning to train policies that control an actuated MANO hand in physics simulation, reproducing human demonstrations while discovering the underlying contact forces that generate the observed object motion. DexCanvas is the first manipulation dataset to combine large-scale real demonstrations, systematic skill coverage based on established taxonomies, and physics-validated contact annotations. The dataset can facilitate research in robotic manipulation learning, contact-rich control, and skill transfer across different hand morphologies. Dexterous manipulation with high-DoF anthropomorphic hands is fundamental to robot learning: it enables the most general form of object interaction and is essential for robots to achieve human-level autonomy in unstructured environments (Yu & Wang, 2022; Ozawa & Tahara, 2017). The field has witnessed rapid advancement along two dimensions: diverse learning paradigms including reinforcement learning for contact-rich control (Chen et al., 2024; 2023) and diffusion-based methods for handling multimodal action distributions (Weng et al., 2024; Wu et al., 2024), alongside dramatic scale expansion from task-specific models to billion-parameter foundation models (Wen et al., 2025; Kim et al., 2024; Zitkovich et al., 2023). However, current flagship manipulation systems predominantly rely on parallel-jaw grippers, while generalizable control of anthropomorphic hands remains limited to simulation or narrow real-world scenarios.
This gap highlights an opportunity: to unlock the full potential of dexterous manipulation, we need large-scale datasets that capture diverse human manipulation strategies with physically accurate contact dynamics and force profiles, the crucial signals for learning robust dexterous control. Building such datasets requires careful consideration of data sources and collection methodologies. The choice between robot-generated and human-sourced data presents fundamental tradeoffs for learning manipulation.
RAPID Hand Prototype: Design of an Affordable, Fully-Actuated Biomimetic Hand for Dexterous Teleoperation
Wan, Zhaoliang, Zhou, Zida, Bi, Zetong, Yang, Zehui, Ding, Hao, Cheng, Hui
This paper addresses the scarcity of affordable, fully-actuated five-fingered hands for dexterous teleoperation, which is crucial for collecting large-scale real-robot data within the "Learning from Demonstrations" paradigm. We introduce the prototype version of the RAPID Hand, the first low-cost, 20-degree-of-actuation (DoA) dexterous hand that integrates a novel anthropomorphic actuation and transmission scheme with an optimized motor layout and structural design to enhance dexterity. Specifically, the RAPID Hand features a universal phalangeal transmission scheme for the non-thumb fingers and an omnidirectional thumb actuation mechanism. Prioritizing affordability, the hand employs 3D-printed parts combined with custom gears for easier replacement and repair. We assess the RAPID Hand's performance through quantitative metrics and qualitative testing in a dexterous teleoperation system, which is evaluated on three challenging tasks: multi-finger retrieval, ladle handling, and human-like piano playing. The results indicate that the RAPID Hand's fully actuated 20-DoF design holds significant promise for dexterous teleoperation.